Continuous Attractors in Recurrent Neural Networks and Phase Space Learning
Authors
Abstract
Recurrent networks can be used as associative memories in which the stored memories are fixed points to which the dynamics of the network converges. These networks, however, can also present continuous attractors, such as limit cycles and chaotic attractors. We argue for the use of such attractors in recurrent networks for the construction of associative memories. Here we provide a training algorithm for continuous attractors and present some numerical results of the learning method, which involves genetic algorithms.

Continuous attractors. A simple recurrent neural network can exhibit a diversity of dynamic behaviors. This diversity, which includes unstable states and continuous attractors, is particularly undesirable when the associative memories are meant to be fixed-point attractors. On the other hand, continuous attractors may be convenient for the construction of memories associated with patterns that show continuous variability [1]. However, convergence of learning is in general not guaranteed for recurrent neural networks, and in particular not for the learning of continuous attractors.

Phase space learning. A learning scheme based on adjusting the position of attractors in phase space eliminates the time dimension from the learning problem. This makes training easier: we only need to adjust the network parameter values so as to reproduce the phase portrait of a specific dynamical system. Any algorithm can be used to adjust the network parameter values; here we choose a genetic algorithm.

Genetic algorithm. The use of genetic algorithms for our training was motivated by a practical reason. The distance between the points of the desired orbit and the network output supplies the fitness function to be minimized. The techniques of variable mutation rate and elitism allowed us to improve the efficiency of our approximations.

Figure 1. Phase space learning.

Conclusion. Our method can approximate closed orbits in R^2 using a one-hidden-layer network. The viability of our method is independent of the learning algorithm used. This study was limited to the learning of a single continuous attractor, taken as a finite number of points; this number, however, can be taken as large as desired. For the learning of more attractors it becomes necessary to include more hidden layers in the network [1].
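The sketch below illustrates the overall recipe in Python, under assumptions of ours rather than the paper's actual setup: a closed orbit (a circle in R^2) is sampled at a finite number of points, a one-hidden-layer network F: R^2 -> R^2 is trained so that it maps each sampled point to the next point on the orbit, and training is carried out by a genetic algorithm whose fitness is the total distance between the desired orbit points and the network output, using elitism and a variable (decaying) mutation rate. The network size, population size, mutation schedule, and all other numbers are purely illustrative.

```python
import numpy as np

# Hypothetical sketch of phase-space learning of a closed orbit in R^2.
# The orbit is sampled at a finite number of points and a one-hidden-layer
# network F is trained so that F(x_k) ~= x_{k+1}; iterating x <- F(x) should
# then make the orbit an (approximate) attractor of the learned map.
# All sizes and GA settings are illustrative assumptions.

rng = np.random.default_rng(0)

# Desired continuous attractor: a unit circle sampled at N points.
N = 40
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
orbit = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # (N, 2)
targets = np.roll(orbit, -1, axis=0)                       # next point along the orbit

H = 12                                                     # hidden units (assumed)
n_params = 2 * H + H + H * 2 + 2                           # W1, b1, W2, b2

def unpack(p):
    """Split a flat parameter vector into the network's weights."""
    i = 0
    W1 = p[i:i + 2 * H].reshape(2, H); i += 2 * H
    b1 = p[i:i + H];                   i += H
    W2 = p[i:i + H * 2].reshape(H, 2); i += H * 2
    b2 = p[i:i + 2]
    return W1, b1, W2, b2

def forward(p, x):
    """One-hidden-layer net mapping phase-plane points to their successors."""
    W1, b1, W2, b2 = unpack(p)
    return np.tanh(x @ W1 + b1) @ W2 + b2

def fitness(p):
    """Total distance between desired orbit points and network output (minimized)."""
    pred = forward(p, orbit)
    return np.sum(np.linalg.norm(pred - targets, axis=1))

# Simple genetic algorithm with elitism and a decaying (variable) mutation rate.
pop_size, n_elite, generations = 60, 4, 2000
pop = rng.normal(scale=0.5, size=(pop_size, n_params))

for gen in range(generations):
    scores = np.array([fitness(p) for p in pop])
    order = np.argsort(scores)
    elite = pop[order[:n_elite]]                           # elitism: keep the best unchanged
    sigma = 0.3 * (1.0 - gen / generations) + 0.01         # variable mutation rate
    children = []
    while len(children) < pop_size - n_elite:
        a, b = elite[rng.integers(n_elite, size=2)]        # crossover between two elites
        mask = rng.random(n_params) < 0.5
        child = np.where(mask, a, b) + rng.normal(scale=sigma, size=n_params)
        children.append(child)
    pop = np.vstack([elite, np.array(children)])

best = pop[np.argmin([fitness(p) for p in pop])]
print("final fitness:", fitness(best))

# If training succeeded, iterating the learned map from a point off the orbit
# should approach the unit circle (radius close to 1).
x = np.array([[0.3, 0.0]])
for _ in range(200):
    x = forward(best, x)
print("radius after 200 iterations:", np.linalg.norm(x))
```

Because the time dimension has been eliminated, any optimizer could replace the genetic algorithm in this sketch; the fitness function only compares static point sets in phase space.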
Similar resources
Learning Continuous Attractors in Recurrent Networks
One approach to invariant object recognition employs a recurrent neural network as an associative memory. In the standard depiction of the network's state space, memories of objects are stored as attractive fixed points of the dynamics. I argue for a modification of this picture: if an object has a continuous family of instantiations, it should be represented by a continuous attractor. This ide...
Phase-Space learning for recurrent networks
We study the problem of learning nonstatic attractors in recurrent networks. With concepts from dynamical systems theory, we show that this problem can be reduced to three sub-problems, (a) that of embedding the temporal trajectory in phase space, (b) approximating the local vector field, and (c) function approximation using feedforward networks. This general framework overcomes problems with tra...
Pii: S0893-6080(98)00064-1
A recurrent neural network can possess multiple stable states, a property that many brain theories have implicated in learning and memory. There is good evidence for such multistability in the brainstem neural network that controls eye position. Because the stable states are arranged in a continuous dynamical attractor, the network can store a memory of eye position with analog neural encoding....
Learning Cycles brings Chaos in Continuous Hopfield Networks
This paper aims at studying the impact of a Hebbian learning algorithm on the recurrent neural network's underlying dynamics. Two different kinds of learning are compared in order to encode information in the attractors of the Hopfield neural net: the storing of static patterns and the storing of cyclic patterns. We show that if the storing of static patterns leads to a reduction of the potent...
Chaos-guided Input Structuring for Improved Learning in Recurrent Neural Networks
Anatomical studies demonstrate that the brain reformats input information to generate reliable responses for performing computations. However, it remains unclear how neural circuits encode complex spatio-temporal patterns. We show that neural dynamics are strongly influenced by the phase alignment between the input and the spontaneous chaotic activity. Input structuring along the dominant chaotic p...
Publication date: 2000